'Eugenics on steroids': the toxic and contested legacy of Oxford's Future of Humanity Institute
Two weeks ago it was quietly announced that the Future of Humanity Institute, the renowned multidisciplinary research centre in Oxford, no longer had a future. It shut down without warning on 16 April. Initially there was just a brief notice on its website stating that it had closed and that its research may continue elsewhere within and outside the university. The institute, which was dedicated to studying existential risks to humanity, was founded in 2005 by the Swedish-born philosopher Nick Bostrom and quickly made a name for itself beyond academic circles – particularly in Silicon Valley, where a number of tech billionaires sang its praises and provided financial support. Bostrom is perhaps best known for his bestselling 2014 book Superintelligence, which warned of the existential dangers of artificial intelligence, but he also gained widespread recognition for his 2003 academic paper "Are You Living in a Computer Simulation?".
- North America > United States > California (0.26)
- North America > The Bahamas (0.15)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
The Best Books on Artificial Intelligence
I've read a couple of your books now, and what I want to know is this: do you really think that artificial intelligence is a threat to the human race and could lead to our extinction?

Yes, I do, but it also has the potential for enormous benefit. I think it's probably going to be either very, very good for us or very, very bad; a bit like a strange attractor in chaos theory, the outcomes in the middle seem less likely. I'm reasonably hopeful, because what will determine whether it's very good or very bad is largely us. We have time, certainly, before artificial general intelligence (AGI) arrives. AGI is an artificial intelligence (AI) with human-level cognitive ability, so it can outperform us – or at least equal us – in every area of cognitive ability that we have. It also has volition and may be conscious, although that's not necessary. We have time before that arrives: time to make sure it's safe.

As well as having scary potential, AI also brings the possibility of immortality, of living forever by uploading your brain. Is that something you think will happen at some point?

I certainly hope it will. Things like immortality, the complete end of poverty and the abolition of suffering are all part of the very, very good outcome, if we get it right. If you have a superintelligence that is many, many times smarter than the smartest human, it could solve many of our problems. Problems like ageing, and how to upload a mind into a computer, do seem, in principle, solvable. So yes, I do think they are realistic.
How Close Is Humanity to the Edge?
In mid-January, Toby Ord, a philosopher and senior research fellow at Oxford University, was reviewing the final proofs for his first book, "The Precipice: Existential Risk and the Future of Humanity." Ord works in the university's Future of Humanity Institute, which specializes in considering our collective fate. He had noticed that a few of his colleagues--those who worked on "bio-risk"--were tracking a new virus in Asia. Occasionally, they e-mailed around projections, which Ord found intriguing, in a hypothetical way. Among other subjects, "The Precipice" deals with the risk posed to our species by pandemics both natural and engineered.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.24)
- Asia (0.24)
- North America > United States > New York (0.04)
- North America > Canada > Quebec > Montreal (0.04)
- Health & Medicine > Therapeutic Area > Infections and Infectious Diseases (0.69)
- Government (0.69)
Nick Bostrom: Simulation and Superintelligence AI Podcast #83 with Lex Fridman
Nick Bostrom is a philosopher at University of Oxford and the director of the Future of Humanity Institute. He has worked on fascinating and important ideas in existential risks, simulation hypothesis, human enhancement ethics, and the risks of superintelligent AI systems, including in his book Superintelligence. I can see talking to Nick multiple times on this podcast, many hours each time, but we have to start somewhere. This conversation is part of the Artificial Intelligence podcast.
AI passes Go: where next for China's artificial intelligence ambitions?
In July 2017, China published its Next Generation AI Development Plan. As analysts such as Jeff Ding of Oxford University point out, it was not a green-field programme; rather, it sought to gather and focus a diverse set of existing initiatives in response to what local academics called a 'Sputnik moment': the 2016 defeat of the world's best Go player, Lee Sedol, by a Google-owned AI. For context, China's AI industry was estimated to be worth around RMB15bn (£1.7bn) when the plan was released. The plan often provokes sceptical responses outside China, but the country has made significant advances toward its 2020 goal (indeed, large parts of it have arguably already been achieved).
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.25)
- Asia > China > Beijing > Beijing (0.05)
- Asia > China > Shanghai > Shanghai (0.05)
- (6 more...)
- Information Technology > Services (0.69)
- Government > Military (0.69)
- Banking & Finance > Trading (0.69)
- (2 more...)
When And How AI Will Go Out Of Control According To 50 Experts
These are just a few ways the world's top researchers and industry leaders have described the threat that artificial intelligence poses to mankind. Will AI enhance our lives or completely upend them? There's no way around it -- artificial intelligence is changing human civilization, from how we work to how we travel to how we enforce laws. As AI technology advances and seeps deeper into our daily lives, its potential to create dangerous situations is becoming more apparent. A Tesla Model 3 owner in California died while using the car's Autopilot feature.
- North America > United States > California (0.25)
- North America > United States > Massachusetts (0.05)
- North America > United States > Arizona (0.05)
- Transportation > Passenger (1.00)
- Transportation > Ground > Road (1.00)
- Transportation > Electric Vehicle (0.90)
- Automobiles & Trucks > Manufacturer (0.90)
The Futuremakers podcast
We live in ever-changing times, so information we can trust is more important than ever before – and it's not always where our academics agree that is most revealing, but where they disagree. Futuremakers is the fly on the wall to that debate. You may already have read a hundred articles about artificial intelligence and the future of society, but these longer conversations – featuring four of our academics at the cutting edge of research and at the forefront of their profession – explore each topic in detail, from the automation of jobs to the inherent bias of algorithms. In 2013, two Oxford academics published a paper titled 'The Future of Employment: How Susceptible Are Jobs to Computerisation?', estimating that 47% of US jobs were at risk of automation. Since then, numerous studies have emerged, arriving at very different conclusions.
- Asia > China (0.06)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- Health & Medicine (1.00)
- Government (0.98)
- Banking & Finance > Trading (0.48)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Mobile (0.41)
More Americans in favor of AI than fear it
Artificial intelligence is likely to be the defining technology of the century, affecting everything from war to jobs to health care. So understanding what the general public wants from AI is important. A new survey suggests that while there's no strong consensus on the topic, more Americans are in favor of AI than actively oppose it. In polling organized by the University of Oxford's Future of Humanity Institute, 41 percent of respondents said they somewhat or strongly supported the development of AI, while 22 percent said they somewhat or strongly opposed it. A further 28 percent said they had no strong feelings one way or the other.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.26)
- North America > United States (0.18)
- Information Technology > Security & Privacy (0.34)
- Education > Educational Setting > K-12 Education (0.34)
Experts Warn on Malicious Use of Artificial Intelligence
The world must prepare for potential malicious use of artificial intelligence (AI) by rogue states, criminals and terrorists, according to a report by a group of 26 security experts. Forecasting rapid growth in cybercrime and the misuse of drones during the next decade – as well as an unprecedented rise in the use of bots to manipulate everything from elections to the news agenda and social media – the report calls for governments and corporations worldwide to address the danger inherent in the myriad applications of AI. The report – The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation – also recommends interventions to mitigate the threats posed by the malicious use of AI. It says that AI has many positive applications but is a dual-use technology, and that AI researchers and engineers should be proactive about the potential for its misuse. Policymakers and technical researchers need to work together now to understand and prepare for the malicious use of AI, according to the authors.
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.06)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.06)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
Future of Humanity Institute
This report examines the intersection of two subjects, China and artificial intelligence, both of which are already difficult enough to comprehend on their own. It provides context for China's AI strategy with respect to past science and technology plans, and it connects the consistent and new features of China's AI approach to the underlying drivers of AI development. In addition, it benchmarks China's current AI capabilities by developing a novel index to measure any country's AI potential, and it highlights the potential implications of China's AI dream for issues of AI safety, national security, economic development, and social governance. The author, Jeffrey Ding, writes, "The hope is that this report can serve as a foundational document for further policy discussion and research on the topic of China's approach to AI." The report draws on the author's translations of Chinese texts on AI policy, a compilation of metrics comparing China's AI capabilities with those of other countries, and conversations with people who have consulted for Chinese companies and institutions involved in shaping the AI scene.
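An index of the kind described above is typically a weighted composite of normalized driver scores. As a minimal illustrative sketch only – the driver categories, weights, and scores below are hypothetical assumptions, not the report's actual methodology or data – such an index can be computed like this:

```python
# Hypothetical sketch of a weighted composite "AI potential" index.
# The categories, weights, and scores are illustrative assumptions,
# NOT the methodology or figures from the report discussed above.

def ai_potential_index(scores: dict, weights: dict) -> float:
    """Combine normalized (0-1) driver scores into a single 0-100 index."""
    assert set(scores) == set(weights), "every driver needs a weight"
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return 100 * sum(scores[k] * weights[k] for k in scores)

# Equal weights over four made-up driver categories:
weights = {"hardware": 0.25, "data": 0.25, "research": 0.25, "ecosystem": 0.25}
country = {"hardware": 0.5, "data": 0.9, "research": 0.6, "ecosystem": 0.7}

print(round(ai_potential_index(country, weights), 1))  # 67.5
```

Any real index of this sort turns on how the drivers are chosen, measured, and weighted, which is precisely the kind of methodological work the report undertakes.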